228 research outputs found

    A new web interface to facilitate access to corpora: development of the ASLLRP data access interface

    A significant obstacle to broad utilization of corpora is the difficulty in gaining access to the specific subsets of data and annotations that may be relevant for particular types of research. With that in mind, we have developed a web-based Data Access Interface (DAI) to provide access to the expanding datasets of the American Sign Language Linguistic Research Project (ASLLRP). The DAI facilitates browsing the corpora, viewing videos and annotations, searching for phenomena of interest, and downloading selected materials from the website. The web interface, compared to providing videos and annotation files offline, also greatly increases access for people who have no prior experience in working with linguistic annotation tools, and it opens the door to integrating the data with third-party applications on the desktop and in the mobile space. In this paper we give an overview of the available videos, annotations, and search functionality of the DAI, as well as plans for future enhancements. We also summarize best practices and key lessons learned that are crucial to the success of similar projects.

    Computer-based tracking, analysis, and visualization of linguistically significant nonmanual events in American Sign Language (ASL)

    Our linguistically annotated American Sign Language (ASL) corpora have formed a basis for research to automate detection by computer of essential linguistic information conveyed through facial expressions and head movements. We have tracked head position and facial deformations, and used computational learning to discern specific grammatical markings. Our ability to detect, identify, and temporally localize the occurrence of such markings in ASL videos has recently been improved by incorporation of (1) new techniques for deformable model-based 3D tracking of head position and facial expressions, which provide significantly better tracking accuracy and recover quickly from temporary loss of track due to occlusion; and (2) a computational learning approach incorporating 2-level Conditional Random Fields (CRFs), suited to the multi-scale spatio-temporal characteristics of the data, which analyses not only low-level appearance characteristics but also the patterns that enable identification of significant gestural components, such as periodic head movements and raised or lowered eyebrows. Here we summarize our linguistically motivated computational approach and the results for detection and recognition of nonmanual grammatical markings; demonstrate our data visualizations and discuss their relevance for linguistic research; and describe work underway to enable such visualizations to be produced over large corpora and shared publicly on the Web.
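    The 2-level CRF pipeline itself is not reproduced here, but the temporal-localization step it performs can be illustrated with a much simpler stand-in: a two-state Viterbi decoder that turns noisy per-frame scores for a nonmanual marking (e.g. raised eyebrows) into a smooth on/off segmentation. The emission scores and switch penalty below are illustrative assumptions, not values from the paper.

```python
def viterbi_binary(emission, switch_penalty=2.0):
    """Most likely off/on (0/1) state sequence for per-frame log-scores.

    emission: sequence of (score_off, score_on) pairs, one per video frame.
    switch_penalty: cost charged whenever the state changes between frames,
    which suppresses single-frame flickers in favour of sustained runs.
    """
    T = len(emission)
    score = [emission[0][0], emission[0][1]]   # best score ending in each state
    back = [[0, 0] for _ in range(T)]          # backpointers for path recovery
    for t in range(1, T):
        new = [0.0, 0.0]
        for s in (0, 1):
            stay = score[s]
            switch = score[1 - s] - switch_penalty
            if stay >= switch:
                new[s], back[t][s] = emission[t][s] + stay, s
            else:
                new[s], back[t][s] = emission[t][s] + switch, 1 - s
        score = new
    s = 0 if score[0] >= score[1] else 1
    path = [s]
    for t in range(T - 1, 0, -1):
        s = back[t][s]
        path.append(s)
    return path[::-1]
```

    The switch penalty plays the role that transition weights play in a CRF: a sustained run of "on" frames survives decoding, while an isolated one-frame blip is cheaper to explain as noise than to pay two state switches for.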

    Cue Integration Using Affine Arithmetic and Gaussians

    In this paper we describe how the connections between affine forms, zonotopes, and Gaussian distributions help us devise an automated cue integration technique for tracking deformable models. This integration technique is based on the confidence estimates of each cue. We use affine forms to bound these confidences. Affine forms represent bounded intervals, with a well-defined set of arithmetic operations. They are constructed from the sum of several independent components. An n-dimensional affine form describes a complex convex polytope, called a zonotope. Because these components lie in bounded intervals, Lindeberg's theorem, a modified version of the central limit theorem, can be used to justify a Gaussian approximation of the affine form. We present a new expectation-based algorithm to find the best Gaussian approximation of an affine form. Both the new and the previous algorithm run in O(n²m) time, where n is the dimension of the affine form, and m is the number of independent components. The constants in the running time of the new algorithm, however, are much smaller, and as a result it runs 40 times faster than the previous one for equal inputs. We show that using the Berry-Esseen theorem it is possible to calculate an upper bound for the error in the Gaussian approximation. Using affine forms and the conversion algorithm, we create a method for automatically integrating cues in the tracking process of a deformable model. The tracking process is described as a dynamical system, in which we model the force contribution of each cue as an affine form. We integrate their Gaussian approximations using a Kalman filter as a maximum likelihood estimator. This method not only provides an integrated result that is dependent on the quality of each one of the cues, but also provides a measure of confidence in the final result. We evaluate our new estimation algorithm in experiments, and we demonstrate our deformable model-based face tracking system as an application of this algorithm.
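    The paper's expectation-based conversion algorithm is not spelled out in the abstract, so the following is only a minimal moment-matching sketch under a stated assumption: the noise symbols of an affine form are treated as i.i.d. uniform on [-1, 1] (each with variance 1/3), the mean and variance are matched to obtain a Gaussian, and two such cue Gaussians are fused by inverse-variance weighting, i.e. a one-step scalar Kalman update.

```python
def affine_to_gaussian(center, coeffs):
    """Moment-match the affine form x0 + sum_i(x_i * eps_i), eps_i ~ U[-1, 1].

    Each independent noise symbol eps_i has mean 0 and variance 1/3, so the
    Gaussian approximation has mean x0 and variance sum(x_i^2) / 3.
    """
    return center, sum(c * c for c in coeffs) / 3.0

def fuse_cues(m1, v1, m2, v2):
    """Inverse-variance (maximum-likelihood) fusion of two Gaussian cues,
    equivalent to a one-step scalar Kalman update; returns (mean, variance).
    The fused variance is smaller than either input, which is what gives a
    confidence measure for the combined result.
    """
    w1, w2 = 1.0 / v1, 1.0 / v2
    v = 1.0 / (w1 + w2)
    return v * (w1 * m1 + w2 * m2), v
```

    For example, two cues of equal variance are averaged, while a tight cue dominates a loose one; in either case the fused variance reports how much the combination should be trusted.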

    Contactless and absolute linear displacement detection based upon 3D printed magnets combined with passive radio-frequency identification

    In this work, a passive, wireless magnetic sensor to monitor linear displacements is proposed. We exploit recent advances in 3D printing and fabricate a polymer-bonded magnet with a spatially linear magnetic field component along the length of the magnet. Regulating the magnetic compound fraction during printing allows specific shaping of the magnetic field distribution. A giant magnetoresistance (GMR) magnetic field sensor is combined with a radio-frequency identification tag in order to passively monitor the field exerted by the printed magnet. Due to the tailored magnetic field, a displacement of the magnet with respect to the sensor can be detected in the sub-mm regime. The sensor design provides good flexibility, as the 3D printing process can be controlled according to application needs. Absolute displacement detection using low-cost components, together with passive operation, long-term stability, and longevity, renders the proposed sensor system ideal for structural health monitoring applications.
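    As a rough illustration of the sensing principle, assuming the printed magnet produces a field component that varies linearly along its length, absolute displacement follows from a one-time linear calibration of sensor readings against known positions. The units (mm, mT) and all numbers below are invented for the example, not taken from the paper.

```python
def calibrate(positions_mm, fields_mT):
    """Least-squares fit of the assumed linear model field = k * x + b0,
    using GMR readings taken at known displacements (a one-time step)."""
    n = len(positions_mm)
    mx = sum(positions_mm) / n
    mb = sum(fields_mT) / n
    k = (sum((x - mx) * (b - mb) for x, b in zip(positions_mm, fields_mT))
         / sum((x - mx) ** 2 for x in positions_mm))
    return k, mb - k * mx

def displacement_mm(field_mT, k, b0):
    """Invert the linear model: a single passively read field value yields an
    absolute (not incremental) position estimate."""
    return (field_mT - b0) / k
```

    Because the model is absolute, a single reading after power loss still resolves position, with no need to re-home the sensor, which is the property that makes the approach attractive for long-term structural monitoring.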

    Live Captions in Virtual Reality (VR)

    Few VR applications and games implement captioning of speech and audio cues, which either inhibits or prevents access to these applications by deaf or hard of hearing (DHH) users, new language learners, and other caption users. Additionally, little to no guidance exists on how to implement live captioning on VR headsets and how it may differ from traditional television captioning. To help fill the void of information on user preferences among different VR captioning styles, we conducted a study with eight DHH participants to test three caption movement behaviors (headlocked, lag, and appear) while watching live-captioned, single-speaker presentations in VR. Participants answered a series of Likert-scale and open-ended questions about their experience. Participant preferences were split, but the majority of participants reported feeling comfortable with using live captions in VR and enjoyed the experience. When participants ranked the caption behaviors, there was an almost equal divide between the three types tested. IPQ results indicated each behavior had similar immersion ratings; however, participants found headlocked and lag captions more user-friendly than appear captions. We suggest that participants may vary in caption preference depending on how they use captions, and that providing opportunities for caption customization is best.
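    The study does not publish its implementation, but the difference between headlocked and lag captions can be sketched with a simple exponential-smoothing follower: with a smoothing factor of 1.0 the caption is rigidly headlocked, while smaller factors make it ease toward the head direction with a visible lag. The factor 0.2 is an arbitrary assumption for illustration, not a value from the study.

```python
def lag_caption_yaw(head_yaws_deg, smoothing=0.2):
    """Per-frame caption yaw under a 'lag' behavior: the caption eases
    toward the current head yaw instead of tracking it rigidly.

    smoothing = 1.0 reproduces headlocked captions; smaller values lag more.
    """
    yaw = head_yaws_deg[0]
    out = []
    for target in head_yaws_deg:
        yaw += smoothing * (target - yaw)  # move a fraction of the way there
        out.append(yaw)
    return out
```

    After a sudden 90-degree head turn, the caption covers only a fraction of the gap each frame and converges over several frames, which is the softened motion participants compared against the rigid headlocked variant.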

    Blob-B-Gone: a lightweight framework for removing blob artifacts from 2D/3D MINFLUX single-particle tracking data

    In this study, we introduce Blob-B-Gone, a lightweight framework to computationally differentiate and eventually remove dense isotropic localization accumulations (blobs) caused by artifactually immobilized particles in MINFLUX single-particle tracking (SPT) measurements. This approach uses purely geometrical features extracted from MINFLUX-detected single-particle trajectories, which are treated as point clouds of localizations. Employing k-means++ clustering, we perform single-shot separation of the feature space to rapidly extract blobs from the dataset without the need for training. We automatically annotate the resulting subsets and, finally, evaluate our results by means of principal component analysis (PCA), highlighting a clear separation in the feature space. We demonstrate our approach using two- and three-dimensional simulations of freely diffusing particles and blob artifacts based on parameters extracted from hand-labeled MINFLUX tracking data of fixed 23-nm bead samples and two-dimensional diffusing quantum dots on model lipid membranes. Applying Blob-B-Gone, we achieve a clear distinction between blob-like and other trajectories, represented in F1 scores of 0.998 (2D) and 1.0 (3D) as well as 0.995 (balanced) and 0.994 (imbalanced). This framework can be straightforwardly applied to similar situations where discerning between blob and elongated time traces is desirable. Given a number of localizations sufficient to express geometric features, the method can operate on any generic point clouds presented to it, regardless of their origin.
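    Blob-B-Gone itself combines several geometric features with k-means++; the stripped-down sketch below keeps only the core idea as an assumption-laden toy: blob artifacts pack many localizations into a small radius of gyration, so even a single feature and a plain 1D 2-means can separate a toy blob from a mobile trajectory. Function names and values are illustrative, not the framework's API.

```python
import math

def radius_of_gyration(traj):
    """Rg of a trajectory treated as a 2D point cloud of localizations;
    an immobilized particle (blob) yields many points but a tiny Rg."""
    n = len(traj)
    cx = sum(p[0] for p in traj) / n
    cy = sum(p[1] for p in traj) / n
    return math.sqrt(sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in traj) / n)

def two_means_1d(values, iters=20):
    """Plain 1D k-means with k=2 (a stand-in for k-means++ on a full
    feature space); returns one label per value, 0 for the lower cluster."""
    centers = [min(values), max(values)]
    labels = [0] * len(values)
    for _ in range(iters):
        labels = [0 if abs(v - centers[0]) <= abs(v - centers[1]) else 1
                  for v in values]
        for j in (0, 1):
            members = [v for v, lab in zip(values, labels) if lab == j]
            if members:
                centers[j] = sum(members) / len(members)
    return labels
```

    As in the paper's single-shot scheme, no training data is needed: the clustering is run once on the extracted features, and the low-Rg cluster is annotated as blob-like.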

    TRAIT2D: a Software for Quantitative Analysis of Single Particle Diffusion Data

    Single particle tracking (SPT) is one of the most widely used tools in optical microscopy to evaluate particle mobility in a variety of situations, including cellular and model membrane dynamics. Recent technological developments, such as interferometric scattering microscopy, have allowed recording of long, uninterrupted single-particle trajectories at kilohertz framerates. The resulting data, in which particles are continuously detected and do not displace much between observations, therefore do not require complex linking algorithms. Moreover, while these measurements offer more detail on the short-term diffusion behaviour of the tracked particles, they are also subject to localisation uncertainties, which are often underestimated by conventional analysis pipelines. We thus developed a Python library, TRAIT2D (Tracking Analysis Toolbox – 2D version), to track particle diffusion at high sampling rates and analyse the resulting trajectories with an innovative approach. The data analysis pipeline introduced is more localisation-uncertainty aware and selects the most appropriate diffusion model for the provided data on a statistical basis. A trajectory simulation platform also allows the user to handily generate trajectories, and even synthetic time-lapses, to test alternative tracking algorithms and data analysis approaches. A high degree of customisation of the analysis pipeline, for example through the introduction of different diffusion modes, is possible from the source code. Finally, graphical user interfaces lower the access barrier for users with little to no programming experience.
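    TRAIT2D's own pipeline is not reproduced here, but the localisation-uncertainty point can be made concrete with the standard free-diffusion model: for 2D Brownian motion with static localisation error sigma, MSD(t) = 4*D*t + 4*sigma^2, so ignoring the non-zero intercept biases the estimated diffusion coefficient. The sketch below is a minimal, assumption-level illustration of that model, not the library's API.

```python
def msd_curve(traj, dt, max_lag):
    """Time-averaged mean squared displacement of a 2D trajectory.
    For free 2D diffusion with static localisation error sigma,
    MSD(t) ~ 4*D*t + 4*sigma^2, so the intercept is not zero."""
    curve = []
    for lag in range(1, max_lag + 1):
        sq = [(traj[i + lag][0] - traj[i][0]) ** 2 +
              (traj[i + lag][1] - traj[i][1]) ** 2
              for i in range(len(traj) - lag)]
        curve.append((lag * dt, sum(sq) / len(sq)))
    return curve

def fit_diffusion(curve):
    """Least-squares line through (t, MSD) points; under the model above,
    slope/4 estimates D and intercept/4 estimates sigma^2."""
    n = len(curve)
    mt = sum(t for t, _ in curve) / n
    mm = sum(m for _, m in curve) / n
    slope = (sum((t - mt) * (m - mm) for t, m in curve)
             / sum((t - mt) ** 2 for t, _ in curve))
    intercept = mm - slope * mt
    return slope / 4.0, intercept / 4.0
```

    Fitting both slope and intercept, rather than forcing the line through the origin, is the simplest form of the uncertainty-aware analysis the abstract argues for.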

    Introducing COSMOS: a Web Platform for Multimodal Game-Based Psychological Assessment Geared Towards Open Science Practice

    We have established the COgnitive Science Metrics Online Survey (COSMOS) platform, which contains a digital psychometrics toolset in the form of applied games measuring a wide range of cognitive functions. Here, we outline this online research endeavor designed for automated psychometric data collection and scalable assessment: once set up, the low costs and effort associated with individual psychometric testing allow substantially larger study cohorts and thus contribute to more reliable study outcomes. We leverage gamification of the data acquisition method to make the tests suitable for online administration. By putting a strong focus on entertainment and individually tailored feedback, we aim to maximize subjects' incentives for repeated and continued participation. The objective of repeated measurement is to obtain more revealing multi-trial average scores, and measures from various operationalizations of the same psychological construct, instead of relying on single-shot measurements. COSMOS is set up to acquire an automatically and continuously growing dataset that can be used to answer a wide variety of research questions. Following the principles of the open science movement, this dataset will also be made accessible to other publicly funded researchers, provided that all precautions for individual data protection are fulfilled. We have developed a secure hosting platform and a series of digital gamified testing instruments that can measure theory of mind, attention, working memory, episodic long- and short-term memory, spatial memory, reaction times, eye-hand coordination, impulsivity, humor appreciation, altruism, fairness, strategic thinking, decision-making, and risk-taking behavior. Furthermore, some of the game-based testing instruments also offer the possibility of using classical questionnaire items.
    A subset of these gamified tests is already implemented in the COSMOS platform, publicly accessible, and currently undergoing evaluation and calibration as normative data is being collected. In summary, our approach can be used to accomplish a detailed and reliable psychometric characterization of thousands of individuals, supplying various studies with large-scale neurocognitive phenotypes. Our game-based online testing strategy can also guide recruitment for studies, as it allows very efficient screening and sample composition. Finally, this setup also allows us to evaluate potential cognitive training effects and whether improvements are merely task-specific or whether generalization occurs within or even across cognitive domains.
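    The statistical rationale for multi-trial averaging can be sketched in a few lines: the standard error of a participant's mean score shrinks as 1/sqrt(n) with the number of sessions, which is why repeated play yields more reliable scores than single-shot testing. The aggregation function below is a generic illustration of that principle, not COSMOS's actual scoring code.

```python
import math
import statistics

def multitrial_score(trial_scores):
    """Aggregate repeated sessions of one gamified test into a single score.

    Returns the mean and its standard error; the standard error shrinks as
    1/sqrt(n), so each additional session tightens the estimate of the
    participant's underlying ability."""
    n = len(trial_scores)
    mean = statistics.fmean(trial_scores)
    sem = statistics.stdev(trial_scores) / math.sqrt(n) if n > 1 else float("inf")
    return mean, sem
```

    The same logic extends to averaging across different operationalizations of one construct, where the pooled estimate is again more stable than any single instrument's score.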